
    A maximum likelihood based technique for validating detrended fluctuation analysis (ML-DFA)

    Detrended Fluctuation Analysis (DFA) is widely used to assess the presence of long-range temporal correlations in time series. Signals with long-range temporal correlations are typically defined as having a power law decay in their autocorrelation function. The output of DFA is an exponent, which is the slope obtained by linear regression of a log-log fluctuation plot against window size. However, if this fluctuation plot is not linear, then the underlying signal is not self-similar, and the exponent has no meaning. There is currently no method for assessing the linearity of a DFA fluctuation plot. Here we present such a technique, called ML-DFA. We scale the DFA fluctuation plot to construct a likelihood function for a set of alternative models including polynomial, root, exponential, logarithmic and spline functions. We use this likelihood function to determine the maximum likelihood and thus to calculate values of the Akaike and Bayesian information criteria, which identify the best-fit model when the number of parameters involved is taken into account and over-fitting is penalised. This ensures that, of the models that fit well, the least complicated is selected as the best fit. We apply ML-DFA to synthetic data from FARIMA processes and sine curves with DFA fluctuation plots whose form has been analytically determined, and to experimentally collected neurophysiological data. ML-DFA assesses whether the hypothesis of a linear fluctuation plot should be rejected, and thus whether the exponent can be considered meaningful. We argue that ML-DFA is essential to obtaining trustworthy results from DFA.
    Comment: 22 pages, 7 figures
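The DFA procedure the abstract builds on (integrate the mean-removed signal, detrend fixed-size windows, then regress log fluctuation against log window size to obtain the exponent) can be sketched in plain Python. This is an illustrative first-order DFA (DFA-1) implementation, not the authors' code; the `linear_fit` helper and all parameter choices are our own.

```python
import math
import random

def linear_fit(xs, ys):
    """Ordinary least-squares fit y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

def dfa(signal, window_sizes):
    """DFA-1: returns (alpha, [F(n) for each window size n])."""
    mean = sum(signal) / len(signal)
    # Step 1: integrate the mean-removed signal (the "profile").
    profile = []
    total = 0.0
    for v in signal:
        total += v - mean
        profile.append(total)
    flucts = []
    for n in window_sizes:
        xs = list(range(n))
        rms_vals = []
        # Step 2: detrend non-overlapping windows of length n with a
        # linear fit and record the root-mean-square residual.
        for start in range(0, len(profile) - n + 1, n):
            seg = profile[start:start + n]
            a, b = linear_fit(xs, seg)
            resid = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, seg))
            rms_vals.append(math.sqrt(resid / n))
        flucts.append(sum(rms_vals) / len(rms_vals))
    # Step 3: the DFA exponent is the slope of log F(n) against log n --
    # the slope whose meaningfulness ML-DFA is designed to test.
    alpha, _ = linear_fit([math.log(n) for n in window_sizes],
                          [math.log(f) for f in flucts])
    return alpha, flucts
```

For uncorrelated white noise this slope should come out near 0.5; the paper's point is that the slope is only interpretable if the log-log plot is actually linear, which this sketch does not itself check.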

    Markers of criticality in phase synchronization

    The concept of the brain as a critical dynamical system is very attractive because systems close to criticality are thought to maximize their dynamic range of information processing and communication. To date, there have been two key experimental observations in support of this hypothesis: (i) neuronal avalanches with power law distribution of size and (ii) long-range temporal correlations (LRTCs) in the amplitude of neural oscillations. The case for how these maximize the dynamic range of information processing and communication is still being made; because a significant substrate for information coding and transmission is neural synchrony, it is of interest to link synchronization measures with those of criticality. We propose a framework for characterizing criticality in synchronization based on an analysis of the moment-to-moment fluctuations of phase synchrony in terms of the presence of LRTCs. This framework relies on an estimation of the rate of change of phase difference and a set of methods we have developed to detect LRTCs. We test this framework against two classical models of criticality (Ising and Kuramoto) and recently described variants of these models designed to more closely represent human brain dynamics. From these simulations we determine the parameters at which these systems show evidence of LRTCs in phase synchronization. We demonstrate proof of principle by analysing pairs of simultaneous human EEG and EMG time series, suggesting that LRTCs of corticomuscular phase synchronization can be detected in the resting state and experimentally manipulated. The existence of LRTCs in fluctuations of phase synchronization suggests that these fluctuations are governed by non-local behavior, with all scales contributing to system behavior. This has important implications regarding the conditions under which one should expect to see LRTCs in phase synchronization. Specifically, brain resting states may exhibit LRTCs reflecting a state of readiness facilitating rapid task-dependent shifts toward and away from synchronous states that abolish LRTCs.
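One of the two classical models the abstract tests against, the Kuramoto model, can be simulated in a few lines to produce the kind of moment-to-moment phase-synchrony time series whose fluctuations the proposed framework analyses for LRTCs. The sketch below is a minimal mean-field Kuramoto simulation of our own, not the authors' implementation; the oscillator count, coupling strengths, and Euler step size are illustrative assumptions.

```python
import math
import cmath
import random

def kuramoto(n_osc=100, coupling=1.5, dt=0.05, steps=2000, seed=0):
    """Euler integration of the mean-field Kuramoto model.

    Returns the time series of the order parameter
    r(t) = |<exp(i*theta_j)>|, a global measure of phase synchrony
    whose fluctuations can then be examined for LRTCs.
    """
    rng = random.Random(seed)
    # Natural frequencies drawn from a standard normal distribution.
    omega = [rng.gauss(0, 1) for _ in range(n_osc)]
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(n_osc)]
    r_series = []
    for _ in range(steps):
        # Complex order parameter: magnitude r, mean phase psi.
        z = sum(cmath.exp(1j * t) for t in theta) / n_osc
        r, psi = abs(z), cmath.phase(z)
        # Mean-field form: each oscillator couples to the order parameter,
        # d(theta_i)/dt = omega_i + K * r * sin(psi - theta_i).
        theta = [t + dt * (w + coupling * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
        r_series.append(r)
    return r_series
```

Below the critical coupling the order parameter stays small and noisy; above it the oscillators lock and r approaches 1. Sweeping the coupling and applying an LRTC detector to `r_series` is the kind of parameter exploration the abstract describes.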